33 research outputs found

    Flexible Memory Networks

    Networks of neurons in some brain areas are flexible enough to encode new memories quickly. Using a standard firing rate model of recurrent networks, we develop a theory of flexible memory networks. Our main results characterize networks having the maximal number of flexible memory patterns, given a constraint graph on the network's connectivity matrix. Modulo a mild topological condition, we find a close connection between maximally flexible networks and rank-1 matrices. The topological condition is H_1(X; Z) = 0, where X is the clique complex associated to the network's constraint graph; this condition is generically satisfied for large random networks that are not overly sparse. In order to prove our main results, we develop some matrix-theoretic tools and present them in a self-contained section independent of the neuroscience context.
    Comment: Accepted to Bulletin of Mathematical Biology, 11 July 201
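
The clique complex mentioned in the abstract has a simple combinatorial definition: its simplices are exactly the cliques of the constraint graph. A minimal sketch (the graphs and helper name are ours, chosen for illustration):

```python
from itertools import combinations

def clique_complex(vertices, edges, max_dim=2):
    """Simplices of the clique complex X of a graph: every subset of
    vertices that forms a clique, up to dimension max_dim."""
    E = {frozenset(e) for e in edges}
    simplices = [frozenset([v]) for v in vertices]
    for k in range(2, max_dim + 2):  # k vertices span a (k-1)-simplex
        for subset in combinations(vertices, k):
            if all(frozenset(p) in E for p in combinations(subset, 2)):
                simplices.append(frozenset(subset))
    return simplices

# triangle graph: the 2-simplex {0,1,2} fills in the loop, so H_1(X; Z) = 0
tri = clique_complex([0, 1, 2], [(0, 1), (1, 2), (0, 2)])

# 4-cycle: no triangles, so the loop stays unfilled and H_1(X; Z) != 0
cyc = clique_complex([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The two examples illustrate the topological condition: the triangle's clique complex has trivial first homology, while the 4-cycle's does not.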

    Clique topology reveals intrinsic geometric structure in neural correlations

    Detecting meaningful structure in neural activity and connectivity data is challenging in the presence of hidden nonlinearities, where traditional eigenvalue-based methods may be misleading. We introduce a novel approach to matrix analysis, called clique topology, that extracts features of the data invariant under nonlinear monotone transformations. These features can be used to detect both random and geometric structure, and depend only on the relative ordering of matrix entries. We then analyzed the activity of pyramidal neurons in rat hippocampus, recorded while the animal was exploring a two-dimensional environment, and confirmed that our method is able to detect geometric organization using only the intrinsic pattern of neural correlations. Remarkably, we found similar results during non-spatial behaviors such as wheel running and REM sleep. This suggests that the geometric structure of correlations is shaped by the underlying hippocampal circuits, and is not merely a consequence of position coding. We propose that clique topology is a powerful new tool for matrix analysis in biological settings, where the relationship of observed quantities to more meaningful variables is often nonlinear and unknown.
    Comment: 29 pages, 4 figures, 13 supplementary figures (last two authors contributed equally)
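
The key invariance is easy to demonstrate: the rank ordering of matrix entries, which is all clique topology uses, is unchanged by any monotone entrywise transform. A sketch with a made-up matrix (names and values are ours):

```python
import math

def edge_order(M):
    """Rank ordering (largest first) of the off-diagonal entries of a
    symmetric matrix -- the only information clique topology uses."""
    n = len(M)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sorted(pairs, key=lambda p: M[p[0]][p[1]], reverse=True)

# a toy symmetric "correlation" matrix (values are illustrative)
C = [[1.0, 0.9, 0.2],
     [0.9, 1.0, 0.5],
     [0.2, 0.5, 1.0]]

# a nonlinear but monotone transform, applied entrywise, preserves the order
F = [[math.tanh(3 * x) for x in row] for row in C]
```

Since `edge_order(C) == edge_order(F)`, any feature built from the ordering alone (such as the sequence of clique complexes and their Betti numbers) is identical for the two matrices, even though their eigenvalues differ.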

    Diversity of emergent dynamics in competitive threshold-linear networks: a preliminary report

    Threshold-linear networks consist of simple units interacting in the presence of a threshold nonlinearity. Competitive threshold-linear networks have long been known to exhibit multistability, where the activity of the network settles into one of potentially many steady states. In this work, we find conditions that guarantee the absence of steady states, while maintaining bounded activity. These conditions lead us to define a combinatorial family of competitive threshold-linear networks, parametrized by a simple directed graph. By exploring this family, we discover that threshold-linear networks are capable of displaying a surprisingly rich variety of nonlinear dynamics, including limit cycles, quasiperiodic attractors, and chaos. In particular, several types of nonlinear behaviors can co-exist in the same network. Our mathematical results also enable us to engineer networks with multiple dynamic patterns. Taken together, these theoretical and computational findings suggest that threshold-linear networks may be a valuable tool for understanding the relationship between network connectivity and emergent dynamics.
    Comment: 12 pages, 9 figures. Preliminary report
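
A minimal simulation sketch of a network in this graph-parametrized family, for the directed 3-cycle. The weight rule and parameter values (eps, delta) follow the standard construction for this family as we understand it; treat the specific numbers as illustrative assumptions:

```python
def tln_step(x, W, b, dt=0.01):
    """One Euler step of the threshold-linear dynamics
    dx_i/dt = -x_i + [sum_j W_ij x_j + b_i]_+ ."""
    n = len(x)
    return [x[i] + dt * (-x[i] + max(0.0, sum(W[i][j] * x[j] for j in range(n)) + b[i]))
            for i in range(n)]

# weights from a simple directed graph 0 -> 1 -> 2 -> 0:
# W_ij = -1 + eps if j -> i, -1 - delta otherwise, 0 on the diagonal
# (assumed parameter choice: eps = 0.25, delta = 0.5)
eps, delta = 0.25, 0.5
edges = {(0, 1), (1, 2), (2, 0)}  # (j, i) means j -> i
W = [[0.0 if i == j else (-1 + eps if (j, i) in edges else -1 - delta)
      for j in range(3)] for i in range(3)]
b = [1.0] * 3

x = [0.2, 0.1, 0.0]
for _ in range(5000):          # integrate to t = 50
    x = tln_step(x, W, b)
# activity stays bounded (all weights are inhibitory, so x_i <= b_i = 1)
# but never settles: the 3-cycle produces ongoing oscillation
```

Because every off-diagonal weight is negative, the external input b bounds the activity, while the cyclic structure prevents any stable steady state -- the coexistence of boundedness and no steady states that the abstract describes.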

    The combinatorial code and the graph rules of Dale networks

    We describe the combinatorics of equilibria and steady states of neurons in threshold-linear networks that satisfy Dale's law. The combinatorial code of a Dale network is characterized in terms of two conditions: (i) a condition on the network connectivity graph, and (ii) a spectral condition on the synaptic matrix. We find that in the weak coupling regime the combinatorial code depends only on the connectivity graph, and not on the particulars of the synaptic strengths. Moreover, we prove that the combinatorial code of a weakly coupled network is a sublattice, and we provide a learning rule for encoding a sublattice in a weakly coupled excitatory network. In the strong coupling regime we prove that the combinatorial code of a generic Dale network is intersection-complete and is therefore a convex code, as is common in some sensory systems in the brain.
    Comment: 22 pages, 4 figures, added discussion section, corrected typos, expanded the background on convex codes
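
Intersection-completeness, the property the abstract invokes, is a purely combinatorial condition that is easy to check directly. A sketch with toy codes of our own choosing:

```python
def is_intersection_complete(code):
    """A combinatorial code (a set of codewords, each a frozenset of
    neuron indices) is intersection-complete if it is closed under
    pairwise intersection; intersection-complete codes are convex."""
    return all(a & b in code for a in code for b in code)

# closed under intersection: {0,1} & {0,2} = {0} is a codeword
good = {frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 2})}

# not closed: {0,1} & {1,2} = {1} is missing
bad = {frozenset({0, 1}), frozenset({1, 2})}
```

Here `is_intersection_complete(good)` holds while `is_intersection_complete(bad)` fails, matching the abstract's logic: once a code is shown to be intersection-complete, convexity follows.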

    Cell Groups Reveal Structure of Stimulus Space

    An important task of the brain is to represent the outside world. It is unclear how the brain may do this, however, as it can only rely on neural responses and has no independent access to external stimuli in order to "decode" what those responses mean. We investigate what can be learned about a space of stimuli using only the action potentials (spikes) of cells with stereotyped—but unknown—receptive fields. Using hippocampal place cells as a model system, we show that one can (1) extract global features of the environment and (2) construct an accurate representation of space, up to an overall scale factor, that can be used to track the animal's position. Unlike previous approaches to reconstructing position from place cell activity, this information is derived without knowing place fields or any other functions relating neural responses to position. We find that simply knowing which groups of cells fire together reveals a surprising amount of structure in the underlying stimulus space; this may enable the brain to construct its own internal representations.
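
The central object, "which groups of cells fire together", can be extracted from a spike raster without any reference to place fields. A minimal sketch with a made-up raster of three place cells whose fields overlap along a track:

```python
def cell_groups(raster):
    """Collect the distinct cell groups (sets of co-active cells) from a
    binary raster: rows = cells, columns = time bins."""
    n, T = len(raster), len(raster[0])
    groups = set()
    for t in range(T):
        g = frozenset(i for i in range(n) if raster[i][t])
        if g:
            groups.add(g)
    return groups

# three cells with overlapping receptive fields along a 1-D track
# (cell 0 overlaps cell 1, cell 1 overlaps cell 2, but 0 and 2 never co-fire)
raster = [
    [1, 1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1, 1],
]
groups = cell_groups(raster)
```

The resulting groups are {0}, {0,1}, {1}, {1,2}, {2}; the absence of {0,2} already reveals the linear arrangement of the three fields, without ever knowing the fields themselves -- the paper's observation in miniature.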

    Understanding short-timescale neuronal firing sequences via bias matrices

    Get PDF
    The brain generates persistent neuronal firing sequences across varying timescales. The short-timescale (~100 ms) sequences are believed to be crucial in the formation and transfer of memories. Large-amplitude local field potentials known as sharp-wave ripples (SWRs) occur irregularly in hippocampus when an animal has minimal interaction with its environment, such as during resting, immobility, or slow-wave sleep. SWRs have long been hypothesized to play a critical role in transferring memories from the hippocampus to the neocortex [1]. While sequential firing during SWRs is known to be biased by the previous experiences of the animal, the exact relationship between the short-timescale sequences during SWRs and the longer-timescale sequences during spatial and nonspatial behaviors is still poorly understood. One hypothesis is that the sequences during SWRs are “replays” or “preplays” of “master sequences”, which are sequences that closely mimic the order of place fields on a linear track [2,3]. Rather than particular hard-coded “master” sequences, an alternative explanation of the observed correlations is that similar sequences arise naturally from the intrinsic biases of firing between pairs of cells. To distinguish these and other possibilities, one needs mathematical tools beyond the center-of-mass sequences and Spearman’s rank-correlation coefficient that are currently used.
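
One natural way to quantify intrinsic pairwise firing biases is a matrix of order statistics over observed sequences. This sketch is our own illustration of the idea, not necessarily the paper's exact construction (the function name and example data are ours):

```python
def bias_matrix(sequences, n):
    """B[i][j] = fraction of sequences containing both cells i and j in
    which i fires before j (0.5 = no bias). Each cell is assumed to
    fire at most once per sequence."""
    wins = [[0] * n for _ in range(n)]
    both = [[0] * n for _ in range(n)]
    for seq in sequences:
        pos = {c: k for k, c in enumerate(seq)}  # firing position of each cell
        for i in pos:
            for j in pos:
                if i != j:
                    both[i][j] += 1
                    if pos[i] < pos[j]:
                        wins[i][j] += 1
    return [[wins[i][j] / both[i][j] if both[i][j] else 0.5
             for j in range(n)] for i in range(n)]

# toy sequences of three cells (illustrative data)
sequences = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]
B = bias_matrix(sequences, 3)
```

Here B[0][2] = 1.0 (cell 0 always precedes cell 2) while B[0][1] = 2/3, so pairwise biases can be strong even when no single "master" ordering is repeated exactly -- the kind of distinction the abstract argues requires tools beyond rank correlations.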